
    Towards Verifiably Ethical Robot Behaviour

    Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional `governor' that assesses the options the system has, and prunes them to select the most ethical choices, is well understood. Recent work has produced such a governor consisting of a `consequence engine' that assesses the likely future outcomes of actions and then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making. Comment: Presented at the 1st International Workshop on AI and Ethics, Sunday 25th January 2015, Hill Country A, Hyatt Regency Austin. Will appear in the workshop proceedings published by AAAI.
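    The governor pattern described in this abstract can be illustrated with a minimal sketch: candidate actions are passed through a consequence engine that predicts their outcomes, and a safety/ethical ordering selects the preferred action. The names (Outcome, predict_outcome, govern) and the lexicographic "human harm first" ordering below are illustrative assumptions, not the authors' implementation.

    ```python
    # Illustrative sketch of an ethical 'governor': a consequence engine scores
    # the predicted outcomes of candidate actions, and a simple selection rule
    # picks the most ethical one. All names and the scoring scheme are
    # assumptions for illustration only.
    from dataclasses import dataclass
    from typing import Callable, List


    @dataclass
    class Outcome:
        description: str
        harm_to_human: float   # predicted harm to humans (0 = none)
        harm_to_robot: float   # predicted harm to the robot itself
        task_progress: float   # how much the action advances the task


    def govern(actions: List[str],
               predict_outcome: Callable[[str], Outcome]) -> str:
        """Return the preferred action under a lexicographic safety/ethical
        ordering: minimise human harm first, then robot harm, then maximise
        task progress."""
        outcomes = {a: predict_outcome(a) for a in actions}
        return min(
            outcomes,
            key=lambda a: (outcomes[a].harm_to_human,
                           outcomes[a].harm_to_robot,
                           -outcomes[a].task_progress),
        )
    ```

    The lexicographic key mirrors a "safety before task" priority; a real consequence engine would obtain the outcome estimates from an internal simulation rather than a lookup.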

    On proactive, transparent and verifiable ethical reasoning for robots

    Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented they have in general been only initial proofs of principle. Here we address the question of desirable attributes for such systems to improve their real-world utility, and how controllers with these attributes might be implemented. We propose that ethically-critical machine reasoning should be proactive, transparent and verifiable. We describe an architecture where the ethical reasoning is handled by a separate layer, augmenting a typical layered control architecture and ethically moderating the robot's actions. It makes use of a simulation-based internal model, and supports proactive, transparent and verifiable ethical reasoning. To do so, the reasoning component of the ethical layer uses our Python-based Beliefs, Desires, Intentions (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach we use a case study implementation to experimentally demonstrate its operation. Importantly, it is the first such robot controller where the ethical machine reasoning has been formally verified.
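    The abstract names a Python-based BDI implementation whose reasoning cycle is logged for transparency. The following is a much-reduced, hypothetical sketch of such a belief-desire-intention cycle with logging; the plan format and selection rule are assumptions for illustration, not the authors' code.

    ```python
    # Hypothetical, much-simplified BDI-style reasoning cycle with logging.
    # The plan format and rule selection are illustrative assumptions only.
    import logging
    from typing import Dict, List, Tuple

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("ethical_layer")

    # A plan maps a triggering belief to the action it recommends.
    Plan = Tuple[str, str]  # (belief that triggers the plan, action to take)


    def bdi_cycle(beliefs: Dict[str, bool], plans: List[Plan]) -> List[str]:
        """One deliberation cycle: match plans against current beliefs, commit
        to the matching actions (intentions), and log every step so the
        reasoning trace can be inspected or formally checked later."""
        log.info("beliefs: %s", {b for b, held in beliefs.items() if held})
        intentions = [action for trigger, action in plans if beliefs.get(trigger)]
        log.info("selected intentions: %s", intentions)
        return intentions


    # Example: an ethical plan alongside an ordinary task plan.
    plans = [
        ("human_in_danger", "warn_and_move_to_block"),
        ("goal_not_reached", "continue_to_goal"),
    ]
    bdi_cycle({"human_in_danger": True, "goal_not_reached": True}, plans)
    ```

    The point of the sketch is the logged trace: because each cycle records which beliefs held and which intentions were selected, the declarative reasoning is both auditable and amenable to formal verification.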

    Experiments in artificial culture: from noisy imitation to storytelling robots

    This paper presents a series of experiments in collective social robotics, spanning more than 10 years, with the long-term aim of building embodied models of (aspects of) cultural evolution. Initial experiments demonstrated the emergence of behavioural traditions in a group of social robots programmed to imitate each other's behaviours (we call these Copybots). These experiments show that the noisy (i.e. less than perfect fidelity) imitation that comes for free with real physical robots gives rise naturally to variation in social learning. More recent experimental work extends the robots' cognitive capabilities with simulation-based internal models, equipping them with a simple artificial theory of mind. With this extended capability we explore, in our current work, social learning not via imitation but via robot-robot storytelling, in an effort to model this very human mode of cultural transmission. In this paper we give an account of the methods and inspiration for these experiments, the experiments themselves and their results, and an outline of possible directions for this programme of research. It is our hope that this paper stimulates not only discussion but also suggestions for hypotheses to test with the Storybots.

    Robot narratives

    There is evidence that humans make sense of how the world works through narrative. We discuss what it might mean for embodied robots to understand the world, and communicate that understanding, in a similar manner. We suggest an architecture for adding narrative to robot cognition, and an experimental scenario for investigating the narrative hypothesis with a combination of physical and simulated robots.

    Evolving Behaviour Trees for Swarm Robotics


    Modelling a wireless connected swarm of mobile robots

    It is a characteristic of swarm robotics that modelling the overall swarm behaviour in terms of the low-level behaviours of individual robots is very difficult. Yet if swarm robotics is to make the transition from the laboratory to real-world engineering realisation, such models would be critical for both overall validation of algorithm correctness and detailed parameter optimisation. We seek models with predictive power: models that allow us to determine the effect of modifying parameters in individual robots on the overall swarm behaviour. This paper presents results from a study applying the probabilistic modelling approach to a class of wireless connected swarms operating in unbounded environments. The paper proposes a probabilistic finite state machine (PFSM) that describes the network connectivity and overall macroscopic behaviour of the swarm, then develops a novel robot-centric approach to estimating the state transition probabilities within the PFSM. Using measured data from simulation, the paper then carefully validates the PFSM model step by step, allowing us to assess the accuracy and hence the utility of the model. © Springer Science + Business Media, LLC 2008
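    A compact sketch of the general PFSM (Markov-chain) modelling approach this abstract describes: estimate state transition probabilities from observed per-robot state sequences, then iterate the macroscopic state distribution forward. The state names, toy traces and estimation-by-counting below are illustrative assumptions, not the paper's robot-centric estimation method.

    ```python
    # Illustrative PFSM (Markov chain) model of macroscopic swarm behaviour:
    # transition probabilities are estimated from observed robot state
    # sequences, then the expected state distribution is iterated forward.
    # State names and the toy data are assumptions for illustration.
    import numpy as np

    STATES = ["connected", "lost", "reconnecting"]  # example per-robot states
    IDX = {s: i for i, s in enumerate(STATES)}


    def estimate_transition_matrix(sequences):
        """Count observed state-to-state transitions and normalise each row."""
        counts = np.zeros((len(STATES), len(STATES)))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[IDX[a], IDX[b]] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums, out=np.zeros_like(counts),
                         where=row_sums > 0)


    def predict_distribution(p0, P, steps):
        """Iterate the macroscopic state distribution: p_{t+1} = p_t @ P."""
        p = np.asarray(p0, dtype=float)
        for _ in range(steps):
            p = p @ P
        return p


    # Toy usage: two short per-robot state traces taken from simulation.
    traces = [["connected", "connected", "lost", "reconnecting", "connected"],
              ["connected", "lost", "reconnecting", "reconnecting", "connected"]]
    P = estimate_transition_matrix(traces)
    print(predict_distribution([1.0, 0.0, 0.0], P, steps=10))
    ```

    Multiplying the distribution by the estimated transition matrix gives the expected fraction of the swarm in each state over time, which is the sense in which such a model has predictive power for parameter changes.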

    On embodied memetic evolution and the emergence of behavioural traditions in Robots

    This paper describes ideas and initial experiments in embodied imitation using e-puck robots, developed as part of a project whose aim is to demonstrate the emergence of artificial culture in collective robot systems. Imitated behaviours (memes) will undergo variation because of the noise and heterogeneities of the robots and their sensors. Robots can select which memes to enact, and, because we have a multi-robot collective, memes are able to undergo multiple cycles of imitation, with inherited characteristics. We thus have the three evolutionary operators: variation, selection and inheritance, and, as we describe in this paper, experimental trials show that we are able to demonstrate embodied movement-meme evolution. © 2011 Springer-Verlag
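    The three operators named in the abstract can be shown in a toy model: memes are short movement sequences, imitation copies them with noise (variation), robots pick which observed meme to enact (selection), and enacted memes are imitated again in the next cycle (inheritance). The meme encoding, noise model and length-based selection rule below are invented for illustration and are not the e-puck experiment itself.

    ```python
    # Toy model of embodied meme evolution. Memes are short move sequences;
    # imitation is noisy, selection prefers longer memes, and selected memes
    # are re-imitated each cycle. All modelling choices here are illustrative
    # assumptions, not the published experimental setup.
    import random

    MOVES = ["fwd", "left", "right", "stop"]


    def imitate(meme, noise=0.1):
        """Copy a meme imperfectly: moves may be mis-observed or repeated."""
        copy = []
        for m in meme:
            r = random.random()
            if r < noise:             # mis-observe the move (variation)
                copy.append(random.choice(MOVES))
            elif r < 1.5 * noise:     # occasionally duplicate the move
                copy.extend([m, m])
            else:
                copy.append(m)
        return copy


    def select(observed_memes):
        """Toy selection rule: prefer the longest observed meme."""
        return max(observed_memes, key=len)


    def run_cycles(seed_meme, n_robots=5, cycles=10):
        memes = [list(seed_meme) for _ in range(n_robots)]
        for _ in range(cycles):
            # Each robot observes the enacted memes, selects one, and enacts
            # its own noisy copy in the next cycle (inheritance with variation).
            memes = [imitate(select(memes)) for _ in range(n_robots)]
        return memes


    print(run_cycles(["fwd", "left", "fwd"]))
    ```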

    Mutual shaping in swarm robotics: User studies in fire and rescue, storage organization, and bridge inspection

    Many real-world applications have been suggested in the swarm robotics literature. However, there is a general lack of understanding of what needs to be done for robot swarms to be useful and trusted by users in reality. This paper aims to investigate user perception of robot swarms in the workplace, and to inform design principles for the deployment of future swarms in real-world applications. Three qualitative studies with a total of 37 participants were conducted across three sectors: fire and rescue, storage organization, and bridge inspection. Each study examined the users' perceptions using focus groups and interviews. In this paper, we describe our findings regarding: the current processes and tools used in these professions and their main challenges; attitudes toward robot swarms assisting them; and the requirements that would encourage them to use robot swarms. We found that there was a generally positive reaction to robot swarms for information gathering and automation of simple processes. Furthermore, a human in the loop is preferred when it comes to decision making. Recommendations to increase trust and acceptance relate to transparency, accountability, safety, reliability, ease of maintenance, and ease of use. Finally, we found that mutual shaping, a methodology to create a bidirectional relationship between users and technology developers so as to incorporate societal choices in all stages of research and development, is a valid approach to increasing knowledge and acceptance of swarm robotics. This paper contributes to the creation of such a culture of mutual shaping between researchers and users, toward increasing the chances of a successful deployment of robot swarms in the physical realm.